
    VisGuided: A Community-driven Approach for Education in Visualization

    We propose a novel educational approach for teaching visualization, using a community-driven and participatory methodology that extends the traditional course boundaries from the classroom to the broader visualization community. We use a visualization community project, VisGuides, as the main platform to support our educational approach. We evaluate the new methodology by means of three use cases from two different universities. Our contributions include the proposed methodology, a discussion of the outcomes of the use cases, the benefits and limitations of our current approach, and a reflection on the open problems and noteworthy gaps in improving current pedagogical techniques to teach visualization and promote critical thinking. Our findings show extensive benefits from the use of our approach in terms of transferable skills for students, educational resources for educators, and additional feedback and research opportunities for the visualization community.

    Complex model calibration through emulation, a worked example for a stochastic epidemic model

    Uncertainty quantification is a formal paradigm of statistical estimation that aims to account for all uncertainties inherent in the modelling process of real-world complex systems. The methods are directly applicable to stochastic models in epidemiology; however, they have thus far not been widely used in this context. In this paper, we provide a tutorial on uncertainty quantification of stochastic epidemic models, aiming to facilitate the use of the uncertainty quantification paradigm for practitioners with other complex stochastic simulators of applied systems. We provide a formal workflow, including the important decisions and considerations that need to be taken, and illustrate the methods on a simple stochastic epidemic model of UK SARS-CoV-2 transmission and patient outcome. We also present new approaches to the visualisation of outputs from sensitivity analyses and uncertainty quantification more generally in high input and/or output dimensions.
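
    The emulation workflow outlined above can be illustrated with a short sketch. The example below (my own illustration, not code accompanying the paper) fits a Gaussian process emulator to a toy chain-binomial SIR simulator and uses a simple history-matching implausibility measure to rule out transmission rates; the simulator, the observed final epidemic size and the variance terms are assumptions made purely for illustration.

```python
# Hedged sketch: GP emulation + history matching for a toy stochastic SIR simulator.
# Everything here (simulator, design, observation) is illustrative, not from the paper.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

def stochastic_sir(beta, gamma=0.1, n=1000, i0=5, days=60):
    """Chain-binomial SIR; returns the final epidemic size (the summary we emulate)."""
    s, i, r = n - i0, i0, 0
    for _ in range(days):
        new_inf = rng.binomial(s, 1 - np.exp(-beta * i / n))
        new_rec = rng.binomial(i, 1 - np.exp(-gamma))
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r

# 1. Sparse design over the uncertain input (transmission rate beta), with repeats.
design = np.linspace(0.05, 0.5, 12)
runs = np.array([[stochastic_sir(b) for _ in range(5)] for b in design])
y_mean = runs.mean(axis=1)

# 2. Fit a Gaussian process emulator to the mean simulator output.
gp = GaussianProcessRegressor(RBF(0.1) + WhiteKernel(1.0), normalize_y=True)
gp.fit(design.reshape(-1, 1), y_mean)

# 3. History matching: keep only beta values whose emulated output is plausible.
observed, obs_var = 600.0, 50.0 ** 2            # hypothetical observation and its variance
grid = np.linspace(0.05, 0.5, 200).reshape(-1, 1)
mu, sd = gp.predict(grid, return_std=True)
implausibility = np.abs(mu - observed) / np.sqrt(sd ** 2 + obs_var)
plausible = grid[implausibility < 3.0]          # conventional cutoff of 3
print(f"plausible beta range: [{plausible.min():.3f}, {plausible.max():.3f}]")
```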

    Visualization for epidemiological modelling: challenges, solutions, reflections and recommendations.

    We report on an ongoing collaboration between epidemiological modellers and visualization researchers by documenting and reflecting upon knowledge constructs (a series of ideas, approaches and methods taken from existing visualization research and practice) deployed and developed to support modelling of the COVID-19 pandemic. Structured independent commentary on these efforts is synthesized through iterative reflection to develop: evidence of the effectiveness and value of visualization in this context; open problems upon which the research communities may focus; guidance for future activity of this type; and recommendations to safeguard the achievements and promote, advance, secure and prepare for future collaborations of this kind. In describing and comparing a series of related projects that were undertaken in unprecedented conditions, our hope is that this unique report, and its rich interactive supplementary materials, will guide the scientific community in embracing visualization in its observation, analysis and modelling of data as well as in disseminating findings. Equally, we hope to encourage the visualization community to engage with impactful science in addressing its emerging data challenges. If we are successful, this showcase of activity may stimulate mutually beneficial engagement between communities with complementary expertise to address problems of significance in epidemiology and beyond. See https://ramp-vis.github.io/RAMPVIS-PhilTransA-Supplement/. This article is part of the theme issue 'Technical challenges of modelling real-life epidemics and examples of overcoming these'.

    Slicing multi-dimensional spaces

    Many physical data are continuous, and many phenomena that we want to study are influenced by a number of factors. To understand these phenomena we need to examine multi-dimensional continuous data. Visual analysis of these phenomena can lead to many insights. However, the question remains of how to visualize something in more than three dimensions on a 2D screen. Most analysis tools for higher-dimensional data (more than three dimensions) have focused on discrete data, and these methods cannot represent the full richness of a continuous process. Most continuous data visualization methods focus on either a particular domain area or a particular task (e.g. optimization). In this thesis I explore the possibilities of creating general-purpose tools for multi-dimensional continuous data analysis. I do this through four key developments. First, I introduce a task taxonomy for continuous multi-dimensional data. Second, I investigate the use of 1D slices to understand multi-dimensional continuous functions. Third, I develop an algorithm to generate 2D slices of simplicial meshes and demonstrate how these can be used to understand shapes. Fourth, I develop an algorithm to render slices at interactive rates, which takes advantage of the regular geometry of the multi-dimensional space as well as the GPU architecture of a modern computer. The results of this work can serve as a basis for research on direct visualization methods for multi-dimensional continuous data. Through this work, I have started a discussion of the tasks involved and given concrete examples of how visualizations can be evaluated against these tasks. My hope is that these developments will lead to further research on general methods for multi-dimensional continuous data visualization.
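
    The 1D-slice idea investigated in the thesis is easy to illustrate. The sketch below (my own illustration, not code from the thesis) plots axis-aligned 1D slices of a multi-dimensional continuous function around a focus point, holding all other coordinates fixed; the test function and the focus point are assumptions chosen for demonstration purposes.

```python
# Hedged sketch: axis-aligned 1D slices of a multi-dimensional continuous function
# around a focus point. The 5-D test function and focus point are illustrative only.
import numpy as np
import matplotlib.pyplot as plt

def f(x):
    """A continuous 5-D test function (quadratic bowl with one interaction term)."""
    return np.sum((x - 0.3) ** 2, axis=-1) + 0.5 * x[..., 0] * x[..., 1]

d = 5
focus = np.full(d, 0.5)                      # the point all slices pass through
t = np.linspace(0.0, 1.0, 200)

fig, axes = plt.subplots(1, d, figsize=(3 * d, 3), sharey=True)
for dim, ax in enumerate(axes):
    pts = np.tile(focus, (t.size, 1))        # copies of the focus point along the slice
    pts[:, dim] = t                          # vary only one coordinate at a time
    ax.plot(t, f(pts))
    ax.axvline(focus[dim], linestyle="--", color="grey")
    ax.set_xlabel(f"x{dim}")
axes[0].set_ylabel("f(x)")
plt.tight_layout()
plt.show()
```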

    Visual analysis of high-dimensional parameter spaces

    We present a system called Tuner to systematically analyze the parameter space of complex computer simulations, which are time consuming to run and consequently cannot be exhaustively sampled. We begin with a sparse initial sampling of the parameter space, then use these samples to create a fast emulator of the simulation. Analyzing this emulator gives the user insight into further sampling of the simulation. Tuner guides the user through sampling and provides tools to find optimal parameter settings for up to two objective functions and to perform sensitivity analysis. We present use cases from the domain of image segmentation algorithms. Since our method must utilize samples of the simulation and relies on an inherently interactive visualization method, we perform a complexity analysis to see how many samples can be rendered while staying interactive. We examined how rendering performance changes with the dimensionality, reconstruction kernel size, and number of sample points. To study this, we decomposed the rendering complexity into a predictive cost function that combines the cost of filtering each data point and the cost to draw each pixel on screen. This cost function is calibrated to the time to filter and draw for two different hardware configurations. The cost formulation is used to examine the effects on rendering time of using box filtering versus a radial distance measure in high-dimensional data spaces, as used by the filtered scatterplot and HyperSlice visualization methods, respectively. We find that, for a constant kernel volume, rendering performance increases with dimensionality in the HyperSlice technique while it decreases with the filtered scatterplot technique. We also find that the total number of sample points, and not the size of the reconstruction kernel, is a much stronger determinant of the rendering time.
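
    The two-term cost model described above can be sketched in a few lines. The code below is my own illustration of such a predictive cost function, not Tuner's calibrated model: a per-sample filtering term plus a per-pixel drawing term, with the two hardware-specific constants fitted by least squares to frame-time measurements. The timings in the example are synthetic placeholders, not measurements from the paper.

```python
# Hedged sketch of a two-term rendering cost model, T = N * t_filter + N_drawn * P * t_pixel,
# calibrated by least squares. The "measurements" below are synthetic, for illustration only.
import numpy as np

def predicted_time(n_points, n_drawn, pixels_per_point, t_filter, t_pixel):
    """Cost to filter every sample plus cost to draw the pixels of the surviving samples."""
    return n_points * t_filter + n_drawn * pixels_per_point * t_pixel

# Columns: total samples, samples passing the filter, pixels per drawn sample, seconds.
measurements = np.array([
    [ 1_000,  120,  100, 0.0022],
    [ 5_000,  610,  400, 0.0150],
    [20_000, 2450,  900, 0.0840],
    [50_000, 6100, 1600, 0.2950],
])
N, N_drawn, P, t_obs = measurements.T

# Linear least squares for the two hardware-specific constants.
A = np.column_stack([N, N_drawn * P])
(t_filter, t_pixel), *_ = np.linalg.lstsq(A, t_obs, rcond=None)
print(f"t_filter ~ {t_filter:.2e} s/sample, t_pixel ~ {t_pixel:.2e} s/pixel")
print(f"predicted frame time: {predicted_time(100_000, 12_000, 1600, t_filter, t_pixel):.3f} s")
```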